Interpretability and robustness must be integrated into machine learning methods for accelerated Magnetic Resonance Imaging (MRI) reconstruction for clinical application. Doing so would enable fast, high-quality imaging of anatomy and pathology. Data Consistency (DC) is crucial for generalization across multi-modal data and for robustness in detecting pathology. This work proposes the Cascades of Independently Recurrent Inference Machines (CIRIM) to assess DC through unrolled optimization, implicitly via gradient descent and explicitly via a designed term. We perform an extensive comparison of the CIRIM to other unrolled optimization methods, namely the End-to-End Variational Network (E2EVN) and the RIM, as well as to the UNet and Compressed Sensing (CS). Evaluation is done in two stages. First, learning across multiple MRI modalities is assessed, i.e., on $T_1$-weighted and FLAIR contrast data and $T_2$-weighted knee data. Second, robustness is tested by reconstructing pathology in 3D FLAIR MRI data of Multiple Sclerosis (MS) patients. Results show that the CIRIM performs best when implicitly enforcing DC, while the E2EVN requires an explicitly formulated DC term. The CIRIM shows the highest lesion contrast resolution when reconstructing the clinical MS data. Compared to CS, performance improves by approximately 11%, while reconstruction time is reduced twenty-fold.
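To make the data-consistency idea concrete, here is a minimal sketch, not taken from the paper's code, of the kind of gradient-descent DC update that unrolled reconstruction methods interleave with their learned denoising steps. A single-coil 2D acquisition, a binary sampling mask, and the step size `eta` are simplifying assumptions.

```python
# Minimal data-consistency sketch for accelerated MRI (illustrative only).
# Assumptions: single-coil 2D k-space, binary sampling mask, step size eta.
# In an unrolled network such as the CIRIM, a step like this would alternate
# with learned (recurrent) image updates.
import numpy as np

def dc_gradient_step(image, masked_kspace, mask, eta=1.0):
    """One gradient step on 0.5 * || mask * F(image) - masked_kspace ||^2."""
    kspace_est = np.fft.fft2(image, norm="ortho")
    residual = mask * (kspace_est - masked_kspace)   # error only where data was acquired
    grad = np.fft.ifft2(residual, norm="ortho")      # adjoint: back to image space
    return image - eta * grad

# Toy usage: undersample a random "image" and apply a few DC steps.
rng = np.random.default_rng(0)
x_true = rng.standard_normal((64, 64))
mask = (rng.random((64, 64)) < 0.3).astype(float)    # roughly 30% of k-space sampled
y = mask * np.fft.fft2(x_true, norm="ortho")
x = np.zeros_like(y)
for _ in range(10):
    x = dc_gradient_step(x, y, mask)
```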
We describe a Physics-Informed Neural Network (PINN) that simulates the flow induced by the astronomical tide in a synthetic port channel, with dimensions based on the Santos - S\~ao Vicente - Bertioga Estuarine System. PINN models aim to combine the knowledge of physical systems and data-driven machine learning models. This is done by training a neural network to minimize the residuals of the governing equations in sample points. In this work, our flow is governed by the Navier-Stokes equations with some approximations. There are two main novelties in this paper. First, we design our model to assume that the flow is periodic in time, which is not feasible in conventional simulation methods. Second, we evaluate the benefit of resampling the function evaluation points during training, which has a near zero computational cost and has been verified to improve the final model, especially for small batch sizes. Finally, we discuss some limitations of the approximations used in the Navier-Stokes equations regarding the modeling of turbulence and how it interacts with PINNs.
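The core training loop can be illustrated compactly. The sketch below uses a toy 1D advection equation rather than the paper's Navier-Stokes setup, but it shows the two ingredients highlighted in the abstract: a loss built from the PDE residual at collocation points, and resampling of those points at every iteration, which costs almost nothing.

```python
# Minimal PINN sketch on a toy 1D advection equation u_t + c * u_x = 0
# (not the paper's Navier-Stokes model). Collocation points are resampled
# at every step, as the abstract recommends.
import torch

net = torch.nn.Sequential(
    torch.nn.Linear(2, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
c = 1.0  # advection speed

for step in range(1000):
    # Resample collocation points (x, t) uniformly at each iteration.
    xt = torch.rand(256, 2, requires_grad=True)
    u = net(xt)
    grads = torch.autograd.grad(u, xt, torch.ones_like(u), create_graph=True)[0]
    u_x, u_t = grads[:, 0:1], grads[:, 1:2]
    residual = u_t + c * u_x
    loss = (residual ** 2).mean()   # a full model adds initial/boundary and data terms
    opt.zero_grad()
    loss.backward()
    opt.step()
```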
Neural machine translation (NMT) has become the de-facto standard in real-world machine translation applications. However, NMT models can unpredictably produce severely pathological translations, known as hallucinations, that seriously undermine user trust. It thus becomes crucial to implement effective preventive strategies to guarantee their proper functioning. In this paper, we address the problem of hallucination detection in NMT by following a simple intuition: as hallucinations are detached from the source content, they exhibit encoder-decoder attention patterns that are statistically different from those of good-quality translations. We frame this problem with an optimal transport formulation and propose a fully unsupervised, plug-in detector that can be used with any attention-based NMT model. Experimental results show that our detector not only outperforms all previous model-based detectors, but is also competitive with detectors that employ large models trained on millions of samples.
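A hedged sketch of the intuition, not the authors' exact detector: aggregate the cross-attention mass that each source token receives and measure how far that distribution is from a reference, here simply the uniform distribution over source positions, with a Wasserstein-1 (optimal transport) distance.

```python
# Illustrative attention-based hallucination score (assumptions: the reference
# distribution is uniform over source positions; the real detector may use a
# different reference and normalisation).
import numpy as np
from scipy.stats import wasserstein_distance

def attention_ot_score(cross_attention):
    """cross_attention: (target_len, source_len) array, rows summing to 1."""
    source_mass = cross_attention.sum(axis=0)        # mass received by each source token
    source_mass = source_mass / source_mass.sum()
    n = source_mass.shape[0]
    positions = np.arange(n) / max(n - 1, 1)         # source positions mapped to [0, 1]
    uniform = np.full(n, 1.0 / n)
    return wasserstein_distance(positions, positions, source_mass, uniform)

# Toy usage: attention glued to a single source token scores higher (more
# hallucination-like) than attention spread evenly over the source.
peaked = np.tile(np.eye(1, 6, 2), (8, 1))            # every target step attends token 2
spread = np.full((8, 6), 1.0 / 6)
assert attention_ot_score(peaked) > attention_ot_score(spread)
```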
The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about the common practice as well as bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical imaging analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, as well as algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants stated that they did not have enough time for method development (32%). 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based. Of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% of the participants performed ensembling based on multiple identical models (61%) or heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
The vulnerabilities of fingerprint-based recognition systems to direct attacks with and without the cooperation of the user are studied. Two different systems, one minutiae-based and one ridge feature-based, are evaluated on a database of real and fake fingerprints. Based on the quality of the fingerprint images and on the results achieved in different operational scenarios, we obtain a number of statistically significant observations regarding the robustness of the systems.
Despite the impact of psychiatric disorders on clinical health, early-stage diagnosis remains a challenge. Machine learning studies have shown that classifiers tend to be overly narrow in the diagnosis prediction task. The overlap between conditions leads to high heterogeneity among participants that is not adequately captured by classification models. To address this issue, normative approaches have emerged as an alternative. By using a generative model to learn the distribution of healthy brain data patterns, we can identify the presence of pathologies as deviations or outliers from the distribution learned by the model. In particular, deep generative models have shown strong results as normative models for identifying neurological lesions in the brain. However, unlike most neurological lesions, psychiatric disorders present subtle changes widespread in several brain regions, making these alterations challenging to identify. In this work, we evaluate the performance of transformer-based normative models to detect subtle brain changes expressed in adolescents and young adults. We trained our model on 3D MRI scans of neurotypical individuals (N=1,765). Then, we obtained the likelihood of neurotypical controls and psychiatric patients with early-stage schizophrenia from an independent dataset (N=93) from the Human Connectome Project. Using the predicted likelihood of the scans as a proxy for a normative score, we obtained an AUROC of 0.82 when assessing the difference between controls and individuals with early-stage schizophrenia. Our approach surpassed recent normative methods based on brain age and Gaussian Process, showing the promising use of deep generative models to help in individualised analyses.
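The final scoring step is simple to illustrate. The snippet below uses stand-in log-likelihood values rather than real model outputs: the negative log-likelihood under the normative model acts as a deviation score, and AUROC summarizes how well it separates controls from patients.

```python
# Illustrative scoring step only (stand-in numbers, not the study's data or model).
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
# Placeholder log-likelihoods from a normative model trained on healthy scans;
# patients are assumed to receive somewhat lower likelihood than controls.
ll_controls = rng.normal(loc=-100.0, scale=5.0, size=60)
ll_patients = rng.normal(loc=-106.0, scale=5.0, size=33)

# Negative log-likelihood as the normative (deviation) score: higher = more atypical.
scores = np.concatenate([-ll_controls, -ll_patients])
labels = np.concatenate([np.zeros(60), np.ones(33)])   # 1 = early-stage schizophrenia
print("AUROC:", round(roc_auc_score(labels, scores), 3))
```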
In many risk-aware and multi-objective reinforcement learning settings, the utility of the user is derived from a single execution of a policy. In these settings, making decisions based on the average future returns is not suitable. For example, in a medical setting a patient may only have one opportunity to treat their illness. Making decisions using just the expected future returns -- known in reinforcement learning as the value -- cannot account for the potential range of adverse or positive outcomes a decision may have. Therefore, we should use the distribution over expected future returns differently to represent the critical information that the agent requires at decision time by taking both the future and accrued returns into consideration. In this paper, we propose two novel Monte Carlo tree search algorithms. Firstly, we present a Monte Carlo tree search algorithm that can compute policies for nonlinear utility functions (NLU-MCTS) by optimising the utility of the different possible returns attainable from individual policy executions, resulting in good policies for both risk-aware and multi-objective settings. Secondly, we propose a distributional Monte Carlo tree search algorithm (DMCTS) which extends NLU-MCTS. DMCTS computes an approximate posterior distribution over the utility of the returns, and utilises Thompson sampling during planning to compute policies in risk-aware and multi-objective settings. Both algorithms outperform the state-of-the-art in multi-objective reinforcement learning for the expected utility of the returns.
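A toy illustration, not the NLU-MCTS or DMCTS algorithms themselves, of why the expected return can be the wrong decision criterion when utility comes from a single execution: applying a concave, risk-averse utility to each possible return before averaging can reverse which action is preferred.

```python
# Expected return vs. expected utility of individual returns (illustrative only;
# the two policies and the square-root utility are hypothetical).
import numpy as np

rng = np.random.default_rng(1)

def utility(ret):
    return np.sqrt(np.maximum(ret, 0.0))   # concave utility => risk-averse user

# Returns collected from many single executions of two hypothetical policies.
safe = rng.normal(loc=9.0, scale=0.5, size=10_000)            # steady payoff near 9
risky = rng.choice([0.0, 20.0], p=[0.5, 0.5], size=10_000)    # all-or-nothing payoff

print("expected return:  safe =", round(safe.mean(), 2), " risky =", round(risky.mean(), 2))
print("expected utility: safe =", round(utility(safe).mean(), 2),
      " risky =", round(utility(risky).mean(), 2))
# The risky policy looks better on expected return (about 10 vs 9), yet the safe
# policy wins once the nonlinear utility is applied to each individual return.
```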
Artificial Intelligence (AI) is having a tremendous impact across most areas of science. Applications of AI in healthcare have the potential to improve our ability to detect, diagnose, prognose, and intervene on human disease. For AI models to be used clinically, they need to be made safe, reproducible and robust, and the underlying software framework must be aware of the particularities (e.g. geometry, physiology, physics) of medical data being processed. This work introduces MONAI, a freely available, community-supported, and consortium-led PyTorch-based framework for deep learning in healthcare. MONAI extends PyTorch to support medical data, with a particular focus on imaging, and provide purpose-specific AI model architectures, transformations and utilities that streamline the development and deployment of medical AI models. MONAI follows best practices for software development, providing an easy-to-use, robust, well-documented, and well-tested software framework. MONAI preserves the simple, additive, and compositional approach of its underlying PyTorch libraries. MONAI is being used by and receiving contributions from research, clinical and industrial teams from around the world, who are pursuing applications spanning nearly every aspect of healthcare.
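A small usage sketch, based on MONAI's public examples (argument names may differ between versions): build a 3D UNet from monai.networks.nets and push a dummy volume through it after a simple intensity transform.

```python
# Hedged MONAI usage sketch; the network configuration and dummy volume are
# illustrative, not a recommended clinical setup.
import torch
from monai.networks.nets import UNet
from monai.transforms import ScaleIntensity

net = UNet(
    spatial_dims=3,
    in_channels=1,
    out_channels=2,
    channels=(16, 32, 64, 128),
    strides=(2, 2, 2),
    num_res_units=2,
)

volume = torch.rand(1, 1, 64, 64, 64)        # batch, channel, depth, height, width
volume = ScaleIntensity()(volume)            # rescale intensities to [0, 1]
with torch.no_grad():
    logits = net(volume)                     # expected shape: (1, 2, 64, 64, 64)
print(logits.shape)
```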
The performance of braking control systems for robotic platforms, e.g., assisted and autonomous vehicles, airplanes and drones, is deeply influenced by the road-tire friction experienced during the maneuver. Therefore, the availability of accurate estimation algorithms is of major importance in the development of advanced control schemes. The focus of this paper is on the estimation problem. In particular, a novel estimation algorithm is proposed, based on a multi-layer neural network. The training is based on a synthetic data set, derived from a widely used friction model. The open-loop performance of the proposed algorithm is evaluated in a number of simulated scenarios. Moreover, different control schemes are used to test the closed-loop scenario, where the estimated optimal slip is used as the set-point. The experimental results and the comparison with a model-based baseline show that the proposed approach can provide an effective estimate of the best slip.
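A hedged sketch of the general recipe described above: generate a synthetic (slip, friction) data set from a tire-friction curve and fit a small MLP to it. The Burckhardt-style curve and its coefficients below are illustrative placeholders, not necessarily the friction model or data used in the paper.

```python
# Illustrative slip-friction estimator trained on synthetic data
# (Burckhardt-style curve with commonly quoted dry-asphalt coefficients;
# placeholders, not the paper's setup).
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

def friction_curve(slip, c1=1.28, c2=23.99, c3=0.52):
    return c1 * (1.0 - np.exp(-c2 * slip)) - c3 * slip

slip = rng.uniform(0.0, 1.0, size=5000)
mu = friction_curve(slip) + rng.normal(scale=0.01, size=slip.shape)   # small noise

model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
model.fit(slip.reshape(-1, 1), mu)

# The learned curve can then be searched for the slip value maximising friction,
# which would serve as the set-point for the braking controller.
grid = np.linspace(0.0, 1.0, 501).reshape(-1, 1)
best_slip = float(grid[np.argmax(model.predict(grid))][0])
print("estimated optimal slip:", round(best_slip, 3))
```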
The International Atomic Energy Agency (IAEA) stopping power database is a highly valued public resource compiling most of the experimental measurements published over nearly a century. The database, accessible to the global scientific community, is continuously updated and has been extensively employed in theoretical and experimental research for more than 30 years. This work aims to employ machine learning algorithms on the 2021 IAEA database to predict accurate electronic stopping power cross sections for any ion and target combination over a wide range of incident energies. Unsupervised machine learning methods are applied to clean the database in an automated manner. These techniques purge the data by removing suspicious outliers and old isolated values. A large portion of the remaining data is used to train a deep neural network, while the rest is set aside, constituting the test set. The present work considers only collisional systems with atomic targets. The first version of the ESPNN (electronic stopping power neural-network code), openly available to users, is shown to yield predicted values in excellent agreement with the experimental results of the test set.
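The two-stage pipeline can be sketched schematically with placeholder data (not the IAEA database or the ESPNN code): an unsupervised outlier filter cleans the measurements, and a neural network then regresses stopping power from ion, target, and energy features.

```python
# Schematic clean-then-regress pipeline with fabricated placeholder data; the
# feature layout and target function are assumptions for illustration only.
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Fake measurements: features = [Z_ion, Z_target, log10(energy)], target = stopping power.
X = np.column_stack([
    rng.integers(1, 93, size=3000),
    rng.integers(1, 93, size=3000),
    rng.uniform(-2, 3, size=3000),
])
y = np.log1p(X[:, 0] * X[:, 1]) / (1.0 + np.abs(X[:, 2])) + rng.normal(scale=0.05, size=3000)

# 1) Unsupervised cleaning: drop points flagged as outliers.
keep = IsolationForest(random_state=0).fit_predict(np.column_stack([X, y])) == 1
X, y = X[keep], y[keep]

# 2) Supervised regression with a held-out test set.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
net.fit(X_tr, y_tr)
print("test R^2:", round(net.score(X_te, y_te), 3))
```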